
    Efficacy of Radiomics and Genomics in Predicting TP53 Mutations in Diffuse Lower Grade Glioma

    An updated classification of diffuse lower-grade gliomas (LGG) was established in the 2016 World Health Organization Classification of Tumors of the Central Nervous System based on molecular alterations such as TP53 mutation. This study investigates machine learning methods for predicting TP53 mutation status using radiomics and genomics features. Radiomics features comprise patients' age and imaging features extracted from conventional MRI. Genomics features are represented by patients' gene expression measured with RNA sequencing. The study uses a total of 105 LGG patients, divided into a training set (80 patients) and a testing set (25 patients). Three TP53 mutation prediction models are constructed based on the source of the training features: a TP53-radiomics model, a TP53-genomics model, and a TP53-radiogenomics model. Radiomics feature selection is performed with a recursive feature selection method. For genomics data, the edgeR method is used to select genes differentially expressed between TP53-mutated and non-mutated cases in the training set. The classification model is built with a Random Forest and cross-validated using repeated 10-fold cross-validation. Finally, the predictive performance of the three models is assessed on the testing set. The TP53-radiomics, TP53-radiogenomics, and TP53-genomics models achieve predictive accuracies of 0.84±0.04, 0.92±0.04, and 0.89±0.07, respectively. These results show the promise of non-invasive MRI radiomics features, and of fusing radiomics with genomics features, for predicting TP53 mutation status.
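    The pipeline below is a minimal sketch of the radiomics branch described above, assuming tabulated features and binary TP53 labels; the placeholder data, the number of selected features, and the Random Forest settings are illustrative assumptions rather than the study's exact configuration.

```python
# Sketch of a TP53-radiomics training pipeline: recursive feature selection,
# a Random Forest classifier, repeated 10-fold cross-validation on the
# training set, and a held-out evaluation on the testing set. All data here
# are placeholders standing in for the 105-patient radiomics feature table.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import (RepeatedStratifiedKFold, cross_val_score,
                                     train_test_split)
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(105, 120))      # placeholder radiomics features (incl. age)
y = rng.integers(0, 2, size=105)     # placeholder TP53 mutation labels

# 80-patient training set and 25-patient testing set, as in the study
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=80, stratify=y,
                                           random_state=0)

model = Pipeline([
    # Recursive feature selection wrapped around a Random Forest
    ("rfe", RFE(RandomForestClassifier(n_estimators=100, random_state=0),
                n_features_to_select=20, step=10)),
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
])

# Repeated 10-fold cross-validation on the training set
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=0)
scores = cross_val_score(model, X_tr, y_tr, cv=cv, scoring="accuracy")
print(f"CV accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")

# Final assessment on the held-out testing set
model.fit(X_tr, y_tr)
print(f"Test accuracy: {model.score(X_te, y_te):.2f}")
```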

    Class Activation Mapping and Uncertainty Estimation in Multi-Organ Segmentation

    Deep learning (DL)-based medical imaging and image segmentation algorithms achieve impressive performance on many benchmarks. Yet the efficacy of deep learning methods for future clinical applications may be questionable due to their limited ability to reason about uncertainty and to interpret probable areas of failure in prediction decisions. It is therefore desirable that a deep learning segmentation model can reliably report a confidence measure and map it back to the original imaging cases to interpret its prediction decisions. In this work, uncertainty estimation for the multi-organ segmentation task is evaluated to interpret the predictive modeling in DL solutions. We use the state-of-the-art nnU-Net to segment 15 abdominal organs (spleen, right kidney, left kidney, gallbladder, esophagus, liver, stomach, aorta, inferior vena cava, pancreas, right adrenal gland, left adrenal gland, duodenum, bladder, prostate/uterus) using 200 patient cases from the Multimodality Abdominal Multi-Organ Segmentation Challenge 2022. The softmax probabilities from different variants of nnU-Net are then used to compute the knowledge uncertainty in the deep learning framework. Knowledge uncertainty from an ensemble of DL models is used to quantify and visualize class activation maps for two example segmented organs. Our preliminary results show that class activation maps may be used to interpret the prediction decisions made by the DL model used in this study.
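    As a concrete illustration of the uncertainty computation described above, the sketch below derives voxel-wise knowledge (epistemic) uncertainty from the softmax outputs of an ensemble of segmentation models; the array shapes and toy data are assumptions for illustration, not the nnU-Net outputs themselves.

```python
# Knowledge uncertainty as the mutual information between the predicted label
# and the ensemble member: total predictive entropy minus the expected
# per-model entropy. Inputs are voxel-wise softmax probabilities.
import numpy as np

def knowledge_uncertainty(probs):
    """probs: array of shape (n_models, n_voxels, n_classes)."""
    eps = 1e-12
    mean_p = probs.mean(axis=0)                                   # ensemble-averaged softmax
    total = -np.sum(mean_p * np.log(mean_p + eps), axis=-1)       # total predictive entropy
    expected = -np.sum(probs * np.log(probs + eps), axis=-1).mean(axis=0)  # data uncertainty
    return total - expected                                       # knowledge (epistemic) part

# Toy example: 5 ensemble members, 1000 voxels, 16 classes (15 organs + background)
rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 1000, 16))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
print(knowledge_uncertainty(probs).shape)   # one uncertainty value per voxel
```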

    Special Section Guest Editorial: Machine Learning In Optics

    This guest editorial summarizes the Special Section on Machine Learning in Optics.

    Two-Stage Transfer Learning for Facial Expression Classification in Children

    Studying facial expressions can provide insight into the development of social skills in children and provide support to individuals with developmental disorders. In affected individuals, such as children with Autism Spectrum Disorder (ASD), atypical interpretations of facial expressions are well documented. In computer vision, many popular and state-of-the-art deep learning architectures (VGG16, EfficientNet, ResNet, etc.) are readily available with pre-trained weights for general object recognition. Transfer learning utilizes these pre-trained models to improve generalization on a new task. In this project, transfer learning is applied to leverage a pre-trained general object recognition model for facial expression classification. Through this method, the base and middle layers are preserved to exploit the existing neural architecture. The investigated method begins with a pre-packaged base architecture trained on ImageNet. In the first transfer learning step, this foundation's task is changed from general object classification to facial expression classification. The second transfer learning step performs a domain change from adult to child data. Finally, the trained network is evaluated on the child facial expression classification task.
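    A minimal sketch of this two-stage procedure is given below, assuming a torchvision VGG16 backbone and hypothetical adult_loader and child_loader datasets; the class count, learning rates, and schedules are illustrative, not the exact experimental settings.

```python
# Two-stage transfer learning sketch: (1) task change from ImageNet object
# recognition to adult facial expression classification, (2) domain change
# from adult to child expression data. Only the classification head is
# replaced; base and middle layers keep their pre-trained weights.
import torch
import torch.nn as nn
from torchvision import models

NUM_EXPRESSIONS = 7  # assumed number of expression classes

def build_model():
    model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
    for p in model.features.parameters():      # freeze the convolutional base
        p.requires_grad = False
    model.classifier[6] = nn.Linear(model.classifier[6].in_features,
                                    NUM_EXPRESSIONS)
    return model

def fine_tune(model, loader, epochs=5, lr=1e-4):
    opt = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()
    return model

# adult_loader and child_loader are assumed DataLoaders over the two datasets.
# model = build_model()
# model = fine_tune(model, adult_loader)            # stage 1: task change
# model = fine_tune(model, child_loader, lr=1e-5)   # stage 2: domain change
```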

    Monocular Camera Viewpoint-Invariant Vehicular Traffic Segmentation and Classification Utilizing Small Datasets

    The work presented here develops a computer vision framework, independent of camera view angle, for vehicle segmentation and classification from roadway traffic systems installed by the Virginia Department of Transportation (VDOT). An automated technique for extracting a region of interest is presented to speed up processing. The VDOT traffic videos are analyzed for vehicle segmentation using an improved robust low-rank matrix decomposition technique. The framework introduces a new and effective thresholding method that improves segmentation accuracy while simultaneously speeding up segmentation processing. Size and shape descriptors from morphological properties, and textural features from the Histogram of Oriented Gradients (HOG), are extracted from the segmented traffic. A multi-class support vector machine classifier is then employed to categorize different vehicle types, including passenger cars, passenger trucks, motorcycles, buses, and small and large utility trucks. Multiple vehicle detections are handled through an iterative k-means clustering over-segmentation process. The proposed algorithm reduced the processed data by an average of 40%. Compared to recent techniques, it showed an average improvement of 15% in segmentation accuracy and was, on average, 55% faster than the compared segmentation techniques. Moreover, a comparative analysis against 23 different deep learning architectures is presented; the resulting algorithm outperformed the compared deep learning algorithms in vehicle classification accuracy. Furthermore, the timing analysis showed that it can operate in real-time scenarios.
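    The classification stage described above can be sketched as follows, assuming HOG textural features computed on each segmented vehicle patch and a multi-class SVM; the patch size, HOG parameters, class list, and toy data are illustrative assumptions rather than the exact VDOT pipeline settings.

```python
# HOG feature extraction from segmented vehicle patches followed by a
# multi-class SVM classifier (one-vs-one by default in scikit-learn).
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import SVC

CLASSES = ["passenger_car", "passenger_truck", "motorcycle", "bus",
           "small_utility_truck", "large_utility_truck"]

def hog_descriptor(patch, size=(64, 128)):
    """Resize a grayscale vehicle patch and compute its HOG feature vector."""
    patch = resize(patch, size, anti_aliasing=True)
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

# Toy training data: random patches standing in for segmented vehicles
rng = np.random.default_rng(0)
patches = rng.random((60, 80, 160))
labels = rng.integers(0, len(CLASSES), size=60)

X = np.stack([hog_descriptor(p) for p in patches])
clf = SVC(kernel="rbf", C=10.0).fit(X, labels)

# Classify a new segmented vehicle patch
pred = clf.predict(hog_descriptor(rng.random((90, 180))).reshape(1, -1))
print(CLASSES[pred[0]])
```

    In the full framework, this HOG vector would be concatenated with the morphological size and shape descriptors before classification.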

    Innovative Computing in Engineering and Medicine II

    Chairs: Drs. Khan Iftekharuddin, Dean Krusienski, & Jiang Li, Department of Electrical and Computer Engineering.

    Innovative Computing in Engineering and Medicine I

    Chairs: Drs. Chung-Hao Chen, Khan Iftekharuddin, & Christian Zemlin, Department of Electrical and Computer Engineering.

    Comparison of Machine Learning Methods for Classification of Alexithymia in Individuals With and Without Autism from Eye-Tracking Data

    Alexithymia describes a psychological state in which individuals struggle to feel and express their emotions. Individuals with alexithymia may also have more difficulty understanding the emotions of others and may show atypical attention to the eyes when recognizing emotions. This is known to affect individuals with Autism Spectrum Disorder (ASD) differently than neurotypical (NT) individuals. Using a public data set of eye-tracking data from seventy individuals with and without autism who have been assessed for alexithymia, we train multiple traditional machine learning models for alexithymia classification, including support vector machines, logistic regression, decision trees, random forests, and a multilayer perceptron. To correct for class imbalance, we evaluate four oversampling strategies: no oversampling, random oversampling, SMOTE, and ADASYN. We consider three groups of data: ASD, NT, and combined ASD+NT. We use a nested leave-one-out cross-validation strategy to perform hyperparameter selection and evaluate model performance. We achieve F1 scores of 90.00% and 51.85% using decision trees for the ASD and NT groups, respectively, and 72.41% using an SVM for the combined ASD+NT group. Splitting the data into ASD and NT groups improves recall for both groups compared to the combined model.
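    The sketch below illustrates one evaluation setting described above: a decision tree with SMOTE oversampling, tuned and scored with nested leave-one-out cross-validation; the placeholder feature matrix and the hyperparameter grid are assumptions, not the study's eye-tracking features or search space.

```python
# Nested leave-one-out cross-validation: an inner GridSearchCV selects the
# decision-tree depth, while the outer leave-one-out loop scores each held-out
# participant. SMOTE sits inside the pipeline so it is refit on each training
# fold and never sees the held-out sample.
import numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.metrics import f1_score
from sklearn.model_selection import GridSearchCV, LeaveOneOut, cross_val_predict
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(70, 12))               # placeholder eye-tracking features
y = (rng.random(70) < 0.3).astype(int)      # imbalanced alexithymia labels

pipe = Pipeline([
    ("smote", SMOTE(random_state=0)),
    ("tree", DecisionTreeClassifier(random_state=0)),
])
grid = {"tree__max_depth": [2, 4, 8, None]}

inner = GridSearchCV(pipe, grid, cv=5, scoring="f1")   # inner model selection
preds = cross_val_predict(inner, X, y, cv=LeaveOneOut())
print(f"F1: {f1_score(y, preds):.2%}")
```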